-
“Big data” gives markets access to previously unmeasured characteristics of individual agents. Policymakers must decide whether and how to regulate the use of this data. We study how new data affects incentives for agents to exert effort in settings such as the labor market, where an agent's quality is initially unknown but is forecast from an observable outcome. We show that measurement of a new covariate has a systematic effect on the average effort exerted by agents, with the direction of the effect determined by whether the covariate is informative about long-run quality versus a shock to short-run outcomes. For a class of covariates satisfying a statistical property that we call strong homoskedasticity, this effect is uniform across agents. More generally, new measurements can impact agents unequally, and we show that these distributional effects have a first-order impact on social welfare.
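The direction-of-effect result lends itself to a small numerical illustration. The sketch below is our own construction, not the paper's model: the linear-normal setup, the variable names, and the parameter values are all assumptions. It computes the weight that a Gaussian forecast of quality places on the observable outcome, with and without a new covariate; the agent's marginal incentive to exert effort scales with that weight, so a quality-informative covariate lowers effort while a shock-informative covariate raises it.

```python
import numpy as np

VAR_THETA, VAR_EPS, VAR_NOISE = 1.0, 1.0, 0.5   # illustrative values, not from the paper

def weight_on_outcome(cov_type):
    """Coefficient on y in the Gaussian projection E[theta | y, x], where
    y = effort + theta + eps and the new covariate x is a noisy measure of
    either long-run quality theta ("quality") or the short-run shock eps
    ("shock").  The agent's effort incentive scales with this weight."""
    if cov_type == "quality":               # x = theta + measurement noise
        cov_ty = np.array([VAR_THETA, VAR_THETA])
        V = np.array([[VAR_THETA + VAR_EPS, VAR_THETA],
                      [VAR_THETA, VAR_THETA + VAR_NOISE]])
    else:                                    # x = eps + measurement noise
        cov_ty = np.array([VAR_THETA, 0.0])
        V = np.array([[VAR_THETA + VAR_EPS, VAR_EPS],
                      [VAR_EPS, VAR_EPS + VAR_NOISE]])
    return np.linalg.solve(V, cov_ty)[0]     # projection coefficient on y

no_covariate = VAR_THETA / (VAR_THETA + VAR_EPS)   # weight on y before x is measured
print(f"no covariate:      {no_covariate:.2f}")                    # 0.50
print(f"quality covariate: {weight_on_outcome('quality'):.2f}")    # 0.25 -> effort falls
print(f"shock covariate:   {weight_on_outcome('shock'):.2f}")      # 0.75 -> effort rises
```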
-
An agent has access to multiple information sources, each modeled as a Brownian motion whose drift provides information about a different component of an unknown Gaussian state. Information is acquired continuously, with the agent choosing both which sources to sample from and how to allocate attention across them, until an endogenously chosen time, at which point a decision is taken. We demonstrate conditions on the agent's prior belief under which it is possible to exactly characterize the optimal information acquisition strategy. We then apply this characterization to derive new results regarding: (1) endogenous information acquisition for binary choice, (2) the dynamic consequences of attention manipulation, and (3) strategic information provision by biased news sources.
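As a rough companion to the continuous-time model, the following discrete-time sketch is our own simplification: the precision-accumulation dynamics, the parameter values, and the myopic attention rule are assumptions, not the paper's characterization of the optimal strategy. It simulates Gaussian learning in which attending to source i for a fraction beta_i of an instant adds beta_i*dt to the posterior precision of the i-th state component, and compares a uniform attention split with a myopic rule when the agent wants to predict a weighted sum of the components.

```python
import numpy as np

def posterior_variance(prior_prec, attention, omega, dt=0.01, T=5.0):
    """Posterior variance of omega @ theta over time, with independent
    Gaussian components whose precisions grow at the attention rates."""
    prec = prior_prec.astype(float).copy()
    path = []
    for _ in range(int(T / dt)):
        beta = attention(prec, omega)               # how attention is split now
        prec += beta * dt                           # precision of component i grows at rate beta_i
        path.append(float(omega @ (omega / prec)))  # Var(omega @ theta | data so far)
    return np.array(path)

uniform = lambda prec, omega: np.full(len(prec), 1.0 / len(prec))

def myopic(prec, omega):
    # put all attention on the source with the largest marginal variance reduction
    gain = omega**2 / prec**2
    beta = np.zeros(len(prec))
    beta[np.argmax(gain)] = 1.0
    return beta

prior_prec = np.array([1.0, 4.0, 0.5])   # the third component is least well known
omega = np.array([1.0, 1.0, 1.0])
print("uniform:", posterior_variance(prior_prec, uniform, omega)[-1])
print("myopic: ", posterior_variance(prior_prec, myopic, omega)[-1])
```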
-
We summarize our recent work that uses machine learning techniques as a complement to theoretical modeling, rather than a substitute for it. The key concepts are those of the completeness and restrictiveness of a model. A theory's completeness is how much it improves predictions over a naive baseline, relative to how much improvement is possible. When a theory is relatively incomplete, machine learning algorithms can help reveal regularities that the theory doesn't capture, and thus lead to the construction of theories that make more accurate predictions. Restrictiveness measures a theory's ability to match arbitrary hypothetical data: a very unrestrictive theory will be complete on almost any data, so the fact that it is complete on the actual data is not very instructive. We algorithmically quantify restrictiveness by measuring how well the theory approximates randomly generated behaviors. Finally, we propose "algorithmic experimental design" as a method to help select which experiments to run.
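Since completeness is defined as a ratio of prediction improvements, a brief sketch can make the two measures concrete. The function names, arguments, and the particular normalization of restrictiveness below are our illustrative choices rather than the paper's code:

```python
import numpy as np

def completeness(err_naive, err_theory, err_best):
    """Share of the achievable improvement over a naive baseline that the
    theory captures on the actual data: (naive - theory) / (naive - best),
    where 'best' is the error of the best attainable predictor (e.g. a
    flexible machine-learning benchmark)."""
    return (err_naive - err_theory) / (err_naive - err_best)

def restrictiveness(theory_error, naive_error, random_behaviors):
    """Rough proxy: how poorly the theory can be fit to randomly generated
    hypothetical behaviors, normalized by the naive baseline's error on the
    same behaviors.  Values near 1 mean the theory rules out most behaviors
    (restrictive); values near 0 mean it can match almost anything.  The
    paper's exact normalization may differ."""
    return float(np.mean([theory_error(b) / naive_error(b)
                          for b in random_behaviors]))

# e.g. a theory cutting prediction error from 0.40 (naive) to 0.25, when the
# best achievable error is 0.20, is 75% complete:
print(completeness(0.40, 0.25, 0.20))   # 0.75
```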
-
We develop a model of social learning from complementary information: short-lived agents sequentially choose from a large set of flexibly correlated information sources for prediction of an unknown state, and information is passed down across periods. Will the community collectively acquire the best kinds of information? Long-run outcomes fall into one of two cases: (i) efficient information aggregation, where the community eventually learns as fast as possible; (ii) “learning traps,” where the community gets stuck observing suboptimal sources and information aggregation is inefficient. Our main results identify a simple property of the underlying informational complementarities that determines which occurs. In both regimes, we characterize which sources are observed in the long run and how often.
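The contrast between efficient aggregation and learning traps can be illustrated with a stylized three-source example. This is our own construction, not the paper's general framework; the source structure and parameter values are assumptions. One standalone source is only moderately precise, while a pair of confounded sources is individually nearly useless but jointly very informative. Myopic agents who inherit the community's information keep choosing the standalone source, whereas a rule that exploits the complementarity learns far faster.

```python
import numpy as np

# Stylized "learning trap" sketch.  Unknown state z = (theta, b); agents care only about theta.
#   Source A : theta     + N(0, 1.0)   -- standalone, moderately precise
#   Source B1: theta + b + N(0, 0.1)   -- precise but confounded by b
#   Source B2: theta - b + N(0, 0.1)   -- complements B1 (b cancels on averaging)
SOURCES = {"A": (np.array([1.0, 0.0]), 1.0),
           "B1": (np.array([1.0, 1.0]), 0.1),
           "B2": (np.array([1.0, -1.0]), 0.1)}

def theta_variance(precision):
    return np.linalg.inv(precision)[0, 0]

def simulate(chooser, periods=200):
    precision = np.diag([1.0, 1.0])          # prior precision of (theta, b)
    counts = {k: 0 for k in SOURCES}
    for t in range(periods):
        name = chooser(precision, t)
        a, noise_var = SOURCES[name]
        precision = precision + np.outer(a, a) / noise_var
        counts[name] += 1
    return theta_variance(precision), counts

def myopic(precision, t):
    # each short-lived agent picks the source with the largest one-step reduction in Var(theta)
    return min(SOURCES, key=lambda k: theta_variance(
        precision + np.outer(SOURCES[k][0], SOURCES[k][0]) / SOURCES[k][1]))

def alternate_B(precision, t):
    return "B1" if t % 2 == 0 else "B2"

print(simulate(myopic))       # keeps sampling A forever: a learning trap
print(simulate(alternate_B))  # exploits the complementarity of B1 and B2
```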